Challenges for FAIR Digital Object Assessment

Authors

Abstract

A Digital Object (DO) "is a sequence of bits, incorporating a work or portion of a work or other information in which a party has rights or interests, or in which there is value". DOs should have persistent identifiers and metadata, and be readable by both humans and machines. A FAIR Digital Object (FDO) is a DO able to interact with automated data processing systems (De Smedt et al. 2020) while following the FAIR (Findable, Accessible, Interoperable, Reusable) principles (Wilkinson 2016). Although FAIR was originally targeted towards data artifacts, new initiatives have emerged to adapt it to other research digital resources such as software (Katz 2021, Lamprecht 2020), ontologies (Poveda-Villalón) and even virtual environments (Collins 2018). In this paper, we describe the challenges that arise when assessing the level of compliance of an FDO with the FAIR principles (i.e., its FAIRness), assuming that an FDO contains multiple DOs and captures their relationships. We explore different methods to calculate an evaluation score, and discuss the challenges and the importance of providing explanations and guidelines for authors.

FAIR assessment tools

There is a growing number of tools used to assess the FAIRness of DOs. Community groups like FAIRassist.org have compiled lists of such resources, which range from self-assessment questionnaires and checklists to semi-automated validators (Devaraju 2021). Examples of validation tools include the F-UJI Automated FAIR Data Assessment Tool (Huber), the FAIR Evaluator and the FAIR Checker for datasets and individual DOs; HowFairIs (Spaaks) for code repositories; and FOOPS (Garijo) for ontologies. When it comes to FDOs, we find two main challenges:

Resource score discrepancy: Different validators for the same type of resource produce different scores. For example, a recent study showcases differences in scores due to how the principles are interpreted by tool authors (Dumontier 2022).

Heterogeneous FDO metadata: Validators run tests against the metadata of an object. However, there is no agreed schema to represent FDO metadata, which complicates this operation. In addition, metadata may be specific to a certain domain (2020).

To address this challenge, we need to i) agree on a minimum common set of metadata to measure and ii) propose a framework with extensions for specialized objects (datasets, software, ontologies, VREs, etc.). An existing community-driven framework (2019) is based on a collection of maturity indicators, principle tests, and a module to apply those tests. The proposed indicators are a starting point to define those needed for each type of object (de Miranda Azevedo and Dumontier).

Aggregation metrics

Another challenge is the best way to score an FDO, independently of the tool used to run the assessment. The four dimensions (Findable, Accessible, Interoperable, Reusable) are usually associated with tests. If a single final score is presented, then by default some principles weigh more than others. Similarly, not all tests are equally important (e.g., in some cases having a license may be considered more important than having full documentation). In our case we consider an FDO an aggregation of resources, and therefore we face the additional challenge of creating an aggregated score for the whole FDO. We consider two scores: a global score, calculated with the formula in Fig. 1-1, which represents the percentage of the total tests passed and doesn't take into account the principle a test belongs to; and an average score (Fig. 1-2), computed from the ratios of tests passed by each aggregated resource plus the ratio of tests passed by the Research Object itself. Both scores are agnostic to the kind of resource analyzed, and both fall in the range [0 - 100].

Discussion

Some FAIR principles apply to metadata records, while others apply to the DO itself, which makes it difficult to assess principles such as "F2: data are described with rich metadata". Therefore, we believe a discussion on the minimal metadata required should be addressed by the community. Scores can also change significantly depending on how metrics are aggregated, so it is key to explain to users the method and provenance behind each score. Communities may adapt the scoring mechanism, e.g., by adding weights to tests; figuring out the right weight for each principle may not yield an objective ranking system, but it can become a means to improve FDOs.
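The two aggregation strategies above, and the weighted variant hinted at in the Discussion, can be illustrated with a short sketch. The Python snippet below is not taken from the paper: it is a minimal interpretation that assumes Fig. 1-1 corresponds to the share of all tests passed and Fig. 1-2 to the mean of per-resource pass ratios; all resource names, weights, and function names are illustrative assumptions.

```python
from typing import Dict, List

# Hypothetical test results: each resource aggregated in the FDO maps to a
# list of booleans, one per FAIR test (True = passed). "research_object"
# stands for tests run on the aggregating object itself.
results: Dict[str, List[bool]] = {
    "research_object": [True, True, False, True],
    "dataset":         [True, False, True, True, True],
    "software":        [True, True, False, False],
    "ontology":        [True, True, True, True],
}

def global_score(results: Dict[str, List[bool]]) -> float:
    """Percentage of all tests passed, regardless of resource or principle
    (one reading of Fig. 1-1: total passed / total executed * 100)."""
    all_tests = [t for tests in results.values() for t in tests]
    return 100.0 * sum(all_tests) / len(all_tests)

def average_score(results: Dict[str, List[bool]]) -> float:
    """Mean of the per-resource pass ratios, including the ratio for the
    Research Object itself (one reading of Fig. 1-2)."""
    ratios = [sum(tests) / len(tests) for tests in results.values()]
    return 100.0 * sum(ratios) / len(ratios)

def weighted_score(results: Dict[str, List[bool]],
                   weights: Dict[str, float]) -> float:
    """Community-weighted variant sketched from the Discussion: each
    resource's pass ratio is scaled by an agreed weight."""
    total_weight = sum(weights[name] for name in results)
    weighted = sum(weights[name] * sum(tests) / len(tests)
                   for name, tests in results.items())
    return 100.0 * weighted / total_weight

if __name__ == "__main__":
    weights = {"research_object": 2.0, "dataset": 1.0,
               "software": 1.0, "ontology": 1.0}  # hypothetical weights
    print(f"Global score:   {global_score(results):.1f}")
    print(f"Average score:  {average_score(results):.1f}")
    print(f"Weighted score: {weighted_score(results, weights):.1f}")
```

All three functions return values in the [0 - 100] range mentioned in the abstract; the weighted variant merely shows how a community-agreed weight per resource (or per principle) would change the final score, and how the chosen aggregation method affects the result.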


Similar resources

Quality Assessment in Digital Libraries - Challenges and Chances

Today, more and more information providers such as digital libraries offer corpora related to a specialized domain. Beyond simple keyword-based searches, the resulting information systems often rely on entity-centered searches. To be able to offer this kind of search, high-quality document processing is essential. In addition, information systems more and more have to rely on semantic techn...


Diagnostic and Developmental Potentials of Dynamic Assessment for Writing Skill

This thesis set out to examine the application of dynamic assessment in a second language learning environment through the following four research questions: (1) understanding learners' abilities that cannot be captured by estimating their independent performance but become apparent during dynamic assessment sessions; (2) the possibility of enhancing learners' abilities through dynamic assessment; (3) the usefulness of dynamic assessment in guiding individualized instruction in a way that is sensitive to individuals' zone of proximal...


Optimistic Fair Exchange of Digital Signatures

LIMITED DISTRIBUTION NOTICE This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents and will be distributed outside of IBM up to one year after the date indicated at the top of this page. In view of the transfer of copyright to the outside publisher...


Challenges for Digital Library Evaluation

While there have been many efforts in research and practice of digital libraries, evaluation has not been a conspicuous activity. It is well recognized that digital library evaluation is a complex and difficult undertaking. We enumerate the challenges facing digital library evaluation and suggest a conceptual framework for evaluation. A review of evaluation efforts in research and practice concentrates o...


Digital Rights Management and Fair Use

The recent controversies surrounding Amazon's removal of George Orwell's '1984' from Kindle readers, the BBC's proposal to encrypt Freeview High Definition (HD) content and Microsoft's permanent ban on Xbox Live users who have modified or 'chipped' their consoles have all served to highlight the debate over Digital Rights Management (DRM) and digital copyright. In the past, DRM measures have be...



Journal

Journal title: Research Ideas and Outcomes

Year: 2022

ISSN: 2367-7163

DOI: https://doi.org/10.3897/rio.8.e95943